dc6a7e655d7e5840e66733e9ee67cc69-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all the reviewers for their helpful suggestions, and we will incorporate the following analysis into our revision. First, we found four typical attention patterns shared by XLNet and BERT, as shown in Figure 1 (rows and columns represent queries and keys, respectively).
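To make the "rows are queries, columns are keys" convention concrete, here is a minimal sketch of how such an attention pattern matrix is computed with scaled dot-product attention (the function name and shapes are our own illustration, not from the paper):

```python
import numpy as np

def attention_pattern(Q, K):
    """Return the attention matrix: rows index queries, columns index keys."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # shape (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract max for stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys of dimension 8
A = attention_pattern(Q, K)
# Each row of A is a probability distribution over the keys for one query,
# which is exactly what a heatmap like Figure 1 visualizes.
```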


Exposing Attention Glitches with Flip-Flop Language Modeling

Neural Information Processing Systems

This simple generative task requires a model to copy binary symbols over long-range dependencies, ignoring the tokens in between. We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques.
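The flip-flop task described above can be sketched as a small sequence generator. This is our own illustration of the setup, with assumed token names: "w b" writes bit b to the flip-flop, "i b" is an ignorable distractor, and "r b" reads, where b must equal the most recently written bit (the long-range copy the model must perform):

```python
import random

def make_flipflop_sequence(length, rng=random):
    """Generate a flip-flop instruction sequence of `length` (op, bit) pairs."""
    tokens, memory = [], None
    for t in range(length):
        if t == 0:
            op = "w"                        # the first instruction must write
        else:
            op = rng.choice(["w", "i", "r"])
        if op == "r":
            bit = str(memory)               # a correct model copies the stored bit
        else:
            bit = str(rng.choice([0, 1]))
            if op == "w":
                memory = int(bit)           # writes update the flip-flop state
        tokens.extend([op, bit])
    return tokens

seq = make_flipflop_sequence(8)
```

A model trained on such sequences only ever "reasons" at read tokens; everything between a write and the next read must be ignored, which is what makes sporadic failures on this task diagnostic of attention glitches.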





Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level

Neural Information Processing Systems

Neighborhood attention reduces the cost of self attention by restricting each token's attention span to its nearest neighbors. This restriction, parameterized by a window size and dilation factor, draws a spectrum of possible attention patterns between linear projection and self attention. Neighborhood attention, and more generally sliding window attention patterns, have long been bounded by infrastructure, particularly in higher-rank spaces (2-D and 3-D), calling for the development of custom kernels, which have been limited in either functionality or performance, if not both. In this work, we aim to massively improve upon existing infrastructure by providing two new methods for implementing neighborhood attention. We first show that neighborhood attention can be represented as a batched GEMM problem, similar to standard attention, and implement it for 1-D and 2-D neighborhood attention.
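To illustrate the window/dilation parameterization, here is a sketch of a 1-D neighborhood attention mask. This is our own simplified illustration, not the paper's kernel: query i may attend only to keys within `window` positions, sampled every `dilation` steps, and we simply drop out-of-range positions at the sequence boundaries (actual neighborhood attention instead shifts the window so every token keeps a full set of neighbors):

```python
import numpy as np

def neighborhood_mask(n, window, dilation=1):
    """Boolean (n, n) mask: mask[i, j] is True iff query i may attend to key j."""
    mask = np.zeros((n, n), dtype=bool)
    half = window // 2
    for i in range(n):
        for off in range(-half, half + 1):
            j = i + off * dilation          # dilation spaces out the neighbors
            if 0 <= j < n:
                mask[i, j] = True
    return mask

m = neighborhood_mask(10, window=3)           # each interior token sees 3 keys
md = neighborhood_mask(10, window=3, dilation=2)  # same count, wider receptive field
```

With window equal to the sequence length this recovers full self attention; with window 1 each token attends only to itself, which is why the abstract describes a spectrum between linear projection and self attention.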